Abstract In droplet-on-demand liquid metal jetting (DoD-LMJ) additive manufacturing, complex physical interactions govern droplet characteristics such as size, velocity, and shape. These droplet characteristics, in turn, determine the functional quality of the printed parts. Hence, to ensure repeatable and reliable part quality, it is necessary to monitor and control the droplet characteristics. Existing approaches for in-situ monitoring of droplet behavior in DoD-LMJ rely on high-speed imaging sensors. The resulting high volume of droplet images is computationally demanding to analyze and hinders real-time control of the process. To overcome this challenge, the objective of this work is to use time series data acquired from an in-process millimeter-wave sensor to predict the size, velocity, and shape of droplets in the DoD-LMJ process. In contrast to high-speed imaging, this sensor produces data-efficient time series signatures that allow rapid, real-time process monitoring. We devised machine learning models that use the millimeter-wave sensor data to predict the droplet characteristics. Specifically, we developed multilayer perceptron-based nonlinear autoregressive models to predict the size and velocity of droplets. Likewise, a supervised machine learning model was trained to classify droplet shape using the frequency-spectrum information contained in the millimeter-wave sensor signatures. High-speed imaging data served as ground truth for model training and validation. These models captured the droplet characteristics with a statistical fidelity exceeding 90% and vastly outperformed conventional statistical modeling approaches. This work thus provides a practically viable sensing approach for real-time quality monitoring of the DoD-LMJ process, in lieu of existing data-intensive image-based techniques.
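To make the modeling idea concrete, below is a minimal sketch of a multilayer perceptron-based nonlinear autoregressive (NARX-style) predictor: an MLP maps a window of past millimeter-wave samples plus past droplet-size values to the next droplet size. The window length, network sizes, and synthetic data are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: NARX-style droplet-size prediction with an MLP.
import numpy as np
from sklearn.neural_network import MLPRegressor

def make_lagged_features(signal, target, n_lags=16):
    """Stack n_lags past sensor samples and past target values as inputs."""
    X, y = [], []
    for t in range(n_lags, len(target)):
        X.append(np.concatenate([signal[t - n_lags:t], target[t - n_lags:t]]))
        y.append(target[t])
    return np.array(X), np.array(y)

# Synthetic stand-ins for the mm-wave signal and imaged droplet sizes.
rng = np.random.default_rng(0)
mmwave = rng.normal(size=2000)
size_um = 500 + 20 * np.convolve(mmwave, np.ones(8) / 8, mode="same")

X, y = make_lagged_features(mmwave, size_um)
model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
model.fit(X[:1500], y[:1500])
print("held-out R^2:", model.score(X[1500:], y[1500:]))
```

The same lagged-feature construction would apply to velocity prediction; the shape classifier described above would instead consume frequency-spectrum features (e.g., FFT magnitudes) of the sensor signal.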
Matrix Profile Index Prediction for Streaming Time Series
Discovery and classification of motifs (repeated patterns) and discords (anomalies) in time series is fundamental to many scientific fields. These and related problems have effectively been solved for offline analysis of time series; however, the existing approaches are computationally intensive and do not lend themselves to streaming time series, such as those produced by IoT sensors, where the sampling rate imposes real-time constraints on computation and there is a strong desire to locate computation as close as possible to the sensor. One promising solution is to use low-cost machine learning models to provide approximate answers to these problems. For example, prior work has trained models to predict the similarity of the most recently sampled window of data points to the time series used for training. This work addresses a more challenging problem: predicting not only the "strength" of the match, but also the relative location in the representative time series where the strongest matching subsequences occur. We evaluate our approach on two real-world datasets and demonstrate speedups as high as roughly 30x over exact computation, with predictive accuracy as high as 87.95%, depending on the granularity of the prediction.
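A hedged sketch of this idea follows: exact matrix profile indices (computed here with the stumpy library) serve as training labels, and a cheap classifier learns to predict which coarse region ("bin") of the reference series best matches each new window. The bin count, window length, and choice of random forest are illustrative assumptions, not the paper's exact setup.

```python
# Sketch: approximate matrix profile index prediction at coarse granularity.
import numpy as np
import stumpy
from sklearn.ensemble import RandomForestClassifier

m, n_bins = 50, 8
rng = np.random.default_rng(1)
reference = np.sin(np.linspace(0, 60, 3000)) + 0.1 * rng.normal(size=3000)

# Exact matrix profile of the reference against itself; column 1 holds
# the index of each subsequence's nearest neighbor.
mp = stumpy.stump(reference, m)
nn_index = mp[:, 1].astype(int)
labels = nn_index * n_bins // (len(reference) - m + 1)  # coarse location bins

windows = np.lib.stride_tricks.sliding_window_view(reference, m)
clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(windows[:2000], labels[:2000])
print("bin accuracy:", clf.score(windows[2000:], labels[2000:]))
```

At inference time, the classifier answers "where does this window match?" with a single cheap prediction instead of a full distance computation, which is what yields the reported speedups.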
- PAR ID: 10282631
- Date Published:
- Journal Name: Workshop on ML for Systems at NeurIPS 2020
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
We consider a multispecies competition model in one- and two-dimensional formulations. To solve the problem numerically, we construct a discrete system using a finite volume approximation in space with a semi-implicit time approximation. The solution of the multispecies competition model converges to a final equilibrium state that does not depend on the initial condition of the system. The final equilibrium state characterizes the survival status of the multispecies system (one or more species survive, or none survive). In real-world problems, the parameter values are unknown and vary within some range. For such problems, a series of Monte Carlo simulations can be used to estimate the system, where a large number of simulations must be performed with random parameter values. A numerical solution is expensive, especially for high-dimensional problems, and requires a large amount of computation time. In this work, to reduce the cost of simulations, we use a deep neural network to rapidly predict the survival status. Numerical results are presented for different neural network configurations, along with a comparison against conventional classifiers.
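As a rough illustration of the numerical scheme described above, here is a semi-implicit time step for a two-species Lotka-Volterra competition system with diffusion in 1D, treating diffusion implicitly and the reaction terms explicitly. The Lotka-Volterra form, parameter values, and finite-difference (rather than finite volume) discretization are stand-in assumptions for illustration only.

```python
# Sketch: semi-implicit step for a diffusive two-species competition model.
import numpy as np

def semi_implicit_step(u, v, dt, dx, D=0.01, r=(1.0, 0.8), a=(1.0, 1.1)):
    n = len(u)
    # Tridiagonal system for implicit diffusion: (I - dt*D*Laplacian).
    A = np.eye(n) * (1 + 2 * dt * D / dx**2)
    off = -dt * D / dx**2
    A += np.diag([off] * (n - 1), 1) + np.diag([off] * (n - 1), -1)
    A[0, 1] = A[-1, -2] = 2 * off  # zero-flux (Neumann) boundaries
    # Explicit logistic-competition reaction terms.
    fu = r[0] * u * (1 - u - a[0] * v)
    fv = r[1] * v * (1 - v - a[1] * u)
    return np.linalg.solve(A, u + dt * fu), np.linalg.solve(A, v + dt * fv)

u, v = np.full(100, 0.4), np.full(100, 0.3)
for _ in range(2000):
    u, v = semi_implicit_step(u, v, dt=0.01, dx=0.1)
print("final means:", u.mean(), v.mean())  # equilibrium encodes survival status
```

A neural network classifier, as in the paper, would replace this entire time-stepping loop: it takes the random parameter values as input and directly predicts the survival status that the equilibrium would reveal.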
Predicting workload behavior during execution is essential for dynamic resource optimization of processor systems. Early studies used simple prediction algorithms such as history tables. More recently, researchers have applied advanced machine learning regression techniques. Workload prediction can be cast as a time series forecasting problem. Time series forecasting is an active research area with recent advances that have not been studied in the context of workload prediction. In this paper, we first perform a comparative study of representative time series forecasting techniques for predicting the dynamic workload of applications running on a CPU. We adapt state-of-the-art matrix profile and dynamic linear models (DLMs), not previously applied to workload prediction, and compare them against traditional SVM and LSTM models that have been popular for handling non-stationary data. We find that all time series forecasting models struggle to predict abrupt workload changes. These changes occur because workloads go through phases, and prior work has studied workload phase detection, classification, and prediction. We propose a novel approach that combines time series forecasting with phase prediction: we treat each phase as a separate time series and train one forecasting model per phase. At runtime, forecasts from phase-specific models are selected and combined based on the predicted phase behavior. We apply our approach to forecasting SPEC workloads running on a state-of-the-art Intel machine. Our results show that an LSTM-based phase-aware predictor can forecast workload CPI with less than 8% mean absolute error while reducing CPI error by more than 12% on average compared to a non-phase-aware approach.
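A minimal sketch of the phase-aware scheme: one forecaster per phase plus a phase classifier, with the runtime forecast drawn from the model of the predicted phase. Ridge regressors stand in for the paper's LSTM/DLM forecasters, and the synthetic two-phase CPI trace, window size, and classifier choice are illustrative assumptions.

```python
# Sketch: phase-aware forecasting via per-phase models and phase selection.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.ensemble import RandomForestClassifier

W = 10  # history window length
rng = np.random.default_rng(2)
cpi = np.concatenate([1.0 + 0.05 * rng.normal(size=500),   # phase 0
                      2.5 + 0.10 * rng.normal(size=500)])  # phase 1
phase = np.repeat([0, 1], 500)

X = np.lib.stride_tricks.sliding_window_view(cpi[:-1], W)
y, ph = cpi[W:], phase[W:]

phase_clf = RandomForestClassifier(n_estimators=25, random_state=0).fit(X, ph)
per_phase = {p: Ridge().fit(X[ph == p], y[ph == p]) for p in (0, 1)}

def forecast(history):
    """Route the next-step forecast to the model of the predicted phase."""
    h = np.asarray(history[-W:]).reshape(1, -1)
    return per_phase[int(phase_clf.predict(h)[0])].predict(h)[0]

print(forecast(cpi[:600]))  # history ends inside phase 1
```

The key design point mirrors the abstract: a single global model must average across phases and so smears abrupt transitions, whereas per-phase models each face a nearly stationary series.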
Abstract As a companion work to [1], this article presents a series of simple formulae and explicit results that illustrate and highlight why classical variational phase-field models cannot possibly predict fracture nucleation in elastic brittle materials. The focus is on "tension-dominated" problems where all principal stresses are nonnegative, that is, problems taking place entirely within the first octant in the space of principal stresses.
Few-shot machine learning attempts to predict outputs given only a very small number of training examples. The key idea behind most few-shot learning approaches is to pre-train the model with a large number of instances from a different but related class of data for which many training instances are available. Few-shot learning has been most successfully demonstrated for classification problems using Siamese deep learning neural networks, and it has been less extensively applied to time-series forecasting. Few-shot forecasting is the task of predicting future values of a time series even when only a small set of historic time series is available; it has applications in domains where a long history of data does not exist. This work describes deep neural network architectures for few-shot forecasting. All the architectures use a Siamese twin network approach to learn a difference function between pairs of time series, rather than forecasting directly from historical data as in traditional forecasting models. The networks are built using long short-term memory (LSTM) units. During forecasting, a model can forecast time-series types never seen in the training data by using the few available instances of the new time-series type as reference inputs. The proposed architectures are evaluated on vehicular traffic data collected in California from the Caltrans Performance Measurement System (PeMS). The models were trained with traffic flow data collected at specific locations and then evaluated by predicting traffic at different locations and at different time horizons (0 to 12 hours). The mean absolute error (MAE) was used both as the evaluation metric and as the loss function for training. The proposed architectures show lower prediction error than a baseline nearest-neighbor forecast model, with prediction error increasing at longer time horizons.
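A hedged sketch of the Siamese idea: a shared LSTM encoder embeds both a reference series and a query series, and a head maps the difference of the embeddings to a multi-step forecast. The layer sizes, the PyTorch framework, and the exact way the difference feeds the forecast head are illustrative assumptions, not the paper's precise architecture.

```python
# Sketch: Siamese LSTM learning a difference function between series pairs.
import torch
import torch.nn as nn

class SiameseForecaster(nn.Module):
    def __init__(self, hidden=32, horizon=12):
        super().__init__()
        self.encoder = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, horizon)

    def encode(self, x):
        _, (h, _) = self.encoder(x)  # same weights for both branches
        return h[-1]

    def forward(self, reference, query):
        # Forecast driven by the learned difference between embeddings.
        diff = self.encode(query) - self.encode(reference)
        return self.head(diff)

model = SiameseForecaster()
ref = torch.randn(4, 48, 1)   # 4 pairs, 48 past time steps, 1 feature
qry = torch.randn(4, 48, 1)
print(model(ref, qry).shape)  # torch.Size([4, 12]) -> 12-step horizon
```

Because the encoder weights are shared and only differences between pairs are learned, an unseen traffic location can be forecast at inference time by pairing its few available windows against known reference series.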