Creators/Authors contains: "VanFossen, Andrew"


  1. As industries transition into the Industry 4.0 paradigm, the relevance of and interest in concepts like the Digital Twin (DT) are at an all-time high. DTs offer direct avenues for industries to make more accurate predictions, rational decisions, and informed plans, ultimately reducing costs and increasing performance and productivity. Adequate operation of DTs in the context of smart manufacturing relies on an evolving data set relating to the real-life object or process, and on a means of dynamically updating the computational model to better conform to the data. This reliance on data becomes even more explicit when physics-based computational models are unavailable or difficult to obtain in practice, as is the case in most modern manufacturing scenarios. For data-based model surrogates to adequately represent the underlying physics, the number of training data points must keep pace with the number of degrees of freedom in the model, which can be on the order of thousands. However, in niche industrial scenarios such as manufacturing applications, the availability of data is limited (on the order of a few hundred data points, at best), mainly because a manual measuring process typically must take place for some of the relevant quantities, e.g., the level of wear of a tool. In other words, notwithstanding the popular notion of big data, there is still a stark shortage of ground-truth data when examining, for instance, a complex system's path to failure. In this work we present a framework to alleviate this problem via modern machine learning tools, showing a robust, efficient, and reliable pathway to augment the available data used to train data-based computational models. Small sample size is a key limitation on machine learning performance, particularly with very high dimensional data. Current efforts for synthetic data generation typically involve either Generative Adversarial Networks (GANs) or Variational AutoEncoders (VAEs). These, however, are tightly tied to image processing and synthesis, and are generally not suited to sensor data generation, which is the type of data that manufacturing applications produce. Additionally, GAN models are susceptible to mode collapse, training instability, and high computational cost when used to create high dimensional data. Alternatively, the encoding of VAEs greatly reduces the dimensional complexity of the data and can effectively regularize the latent space, but often produces poor representational synthetic samples. Our proposed method therefore incorporates the latent space learned by an AutoEncoder (AE) architecture into the training of the generation network of a GAN. The advantages of such a scheme are twofold: (i) the latent-space representation created by the AE reduces the complexity of the distribution the generator must learn, allowing for quicker discriminator convergence, and (ii) the structure in the sensor data is better captured in the transition from the original space to the latent space. Through time statistics (up to the fifth moment), ARIMA coefficients, and Fourier series coefficients, we compare the synthetic data from our proposed AE+GAN model with the original sensor data. We also show that the performance of our proposed method is at least comparable with that of the Riemannian Hamiltonian VAE, a recently published data augmentation framework specifically designed to handle very small, high dimensional data sets. A minimal illustrative sketch of this AE+GAN scheme is given after this list.
  2. This work presents an integrated architecture for a prognostic digital twin for smart manufacturing subsystems. The specific case of cutting tool wear (flank wear) in a CNC machine is considered, using benchmark data sets provided by the Prognostics and Health Management (PHM) Society. This paper emphasizes the role of robust uncertainty quantification, especially in the presence of data-driven black-box and gray-box dynamic models. A surrogate dynamic model is constructed to track the evolution of flank wear using a reduced set of features extracted from multi-modal sensor time-series data. The digital twin's uncertainty quantification engine integrates this dynamic model with a machine emulator that is tasked with generating future operating scenarios for the machine. The surrogate dynamic model and emulator are combined in a closed-loop architecture with an adaptive Monte Carlo uncertainty forecasting framework that allows quantities of interest critical to prognostics to be predicted within user-prescribed bounds. Numerical results using the PHM data set illustrate how the adaptive uncertainty forecasting tools deliver a trustworthy forecast by keeping predictive error within the prescribed tolerance. A sketch of such a closed-loop forecasting step, under illustrative assumptions, is given after this list.
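
Sketch for item 1 (AE+GAN synthetic sensor data): the following is a minimal, illustrative Python sketch (PyTorch assumed) of the general idea described in the abstract: an AutoEncoder learns a compact latent representation of high-dimensional sensor windows, and a GAN generator and discriminator are then trained entirely in that latent space, with synthetic samples decoded back to the sensor space. All class names, dimensions, layer sizes, and training settings are hypothetical and are not taken from the paper.

# Illustrative AE+GAN sketch: GAN trained in the latent space of an AutoEncoder.
# Dimensions and architectures below are assumptions, not the paper's values.
import torch
import torch.nn as nn

SENSOR_DIM, LATENT_DIM, NOISE_DIM = 3000, 32, 16  # hypothetical sizes

class AE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(SENSOR_DIM, 256), nn.ReLU(),
                                 nn.Linear(256, LATENT_DIM))
        self.dec = nn.Sequential(nn.Linear(LATENT_DIM, 256), nn.ReLU(),
                                 nn.Linear(256, SENSOR_DIM))
    def forward(self, x):
        z = self.enc(x)
        return self.dec(z), z

ae = AE()
generator = nn.Sequential(nn.Linear(NOISE_DIM, 64), nn.ReLU(),
                          nn.Linear(64, LATENT_DIM))
discriminator = nn.Sequential(nn.Linear(LATENT_DIM, 64), nn.LeakyReLU(0.2),
                              nn.Linear(64, 1))

bce = nn.BCEWithLogitsLoss()
mse = nn.MSELoss()
opt_ae = torch.optim.Adam(ae.parameters(), lr=1e-3)
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_batch):
    """One combined step: fit the AE, then the latent-space GAN."""
    n = real_batch.size(0)

    # 1) AutoEncoder reconstruction on real sensor windows.
    recon, _ = ae(real_batch)
    loss_ae = mse(recon, real_batch)
    opt_ae.zero_grad(); loss_ae.backward(); opt_ae.step()

    # 2) Discriminator: real latent codes vs. generated latent codes.
    z_real = ae.enc(real_batch).detach()
    z_fake = generator(torch.randn(n, NOISE_DIM)).detach()
    loss_d = bce(discriminator(z_real), torch.ones(n, 1)) + \
             bce(discriminator(z_fake), torch.zeros(n, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 3) Generator: fool the discriminator in the low-dimensional latent space.
    z_fake = generator(torch.randn(n, NOISE_DIM))
    loss_g = bce(discriminator(z_fake), torch.ones(n, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

def synthesize(n):
    """Decode generated latent codes back to synthetic sensor-space samples."""
    with torch.no_grad():
        return ae.dec(generator(torch.randn(n, NOISE_DIM)))

Because the adversarial game is played over the low-dimensional latent codes rather than the raw sensor windows, the generator only has to match a much simpler distribution, which is the source of the quicker discriminator convergence claimed in the abstract.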
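Sketch for item 2 (adaptive Monte Carlo forecasting): the snippet below illustrates, under stated assumptions, the kind of closed loop the abstract describes: an emulator proposes future operating scenarios, a surrogate model propagates flank wear through them, and the Monte Carlo ensemble grows until the confidence half-width on the forecast quantity meets a user-prescribed tolerance. The functions emulate_scenario and propagate_wear are hypothetical stand-ins, not the paper's emulator or surrogate model.

# Illustrative adaptive Monte Carlo forecast loop; models are placeholders.
import numpy as np

rng = np.random.default_rng(0)

def emulate_scenario(horizon):
    """Hypothetical emulator: sample a future sequence of cutting conditions."""
    return rng.normal(loc=1.0, scale=0.1, size=horizon)

def propagate_wear(wear0, scenario):
    """Hypothetical surrogate: simple stochastic flank-wear growth recursion."""
    wear = wear0
    for load in scenario:
        wear += 1e-3 * load + rng.normal(scale=1e-4)
    return wear

def adaptive_mc_forecast(wear0, horizon, tol, batch=200, max_samples=20000):
    """Grow the ensemble until the 95% CI half-width on mean wear is within tol."""
    samples = []
    while len(samples) < max_samples:
        samples.extend(propagate_wear(wear0, emulate_scenario(horizon))
                       for _ in range(batch))
        arr = np.asarray(samples)
        half_width = 1.96 * arr.std(ddof=1) / np.sqrt(len(arr))
        if half_width <= tol:
            break
    return arr.mean(), half_width, len(arr)

mean_wear, err, n = adaptive_mc_forecast(wear0=0.05, horizon=50, tol=5e-4)
print(f"forecast wear: {mean_wear:.4f} ± {err:.4f} (n = {n})")

The adaptivity here is only in the ensemble size; the point of the sketch is the closed loop between scenario generation, surrogate propagation, and a stopping rule tied to a user-prescribed predictive tolerance.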