Title: Combining multitask and transfer learning with deep Gaussian processes for autotuning-based performance engineering
We combine deep Gaussian processes (DGPs) with multitask and transfer learning for the performance modeling and optimization of HPC applications. Deep Gaussian processes merge the uncertainty quantification advantage of Gaussian processes (GPs) with the predictive power of deep learning. Multitask learning improves learning efficiency when several similar tasks must be learned simultaneously, while transfer learning lets previously learned models aid the learning of new tasks. Combining DGPs with both allows not only improved tuning of an application's parameters on the problems of interest but also prediction of good parameters for any problem the application might encounter. A comparison with state-of-the-art autotuners demonstrates the advantage of our approach on two application problems.
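As a rough illustration of the tuning loop such a surrogate drives, the sketch below uses a plain scikit-learn GP in place of the paper's DGP, appends the problem size as an extra input so one surrogate is shared across tasks (a simple stand-in for multitask learning), and selects new configurations by expected improvement; `run_app` and the problem sizes are hypothetical placeholders, not the paper's setup.

```python
# Hedged sketch: GP-surrogate autotuning with the problem size as an extra
# input dimension, so one model is shared across tasks. A plain GP stands in
# for the paper's deep/multitask GP; `run_app` is a hypothetical application.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def run_app(params, size):
    """Hypothetical: run the HPC application, return measured runtime."""
    p = np.asarray(params)
    return float(np.sum((p - 0.3 * size) ** 2) + 0.01 * np.random.randn())

rng = np.random.default_rng(0)
target_size = 4.0
X, y = [], []
for s in (1.0, 2.0, target_size):          # seed every task with a few configs
    for _ in range(5):
        p = rng.uniform(0, 1, size=2)
        X.append(np.append(p, s)); y.append(run_app(p, s))
X, y = np.array(X), np.array(y)

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
for _ in range(20):                        # tune the target task
    gp.fit(X, y)
    cand = rng.uniform(0, 1, size=(256, 2))
    mu, sd = gp.predict(np.hstack([cand, np.full((256, 1), target_size)]),
                        return_std=True)
    best = y[X[:, 2] == target_size].min()  # best runtime seen on this task
    z = (best - mu) / np.maximum(sd, 1e-9)
    ei = (best - mu) * norm.cdf(z) + sd * norm.pdf(z)  # expected improvement
    nxt = cand[np.argmax(ei)]
    X = np.vstack([X, np.append(nxt, target_size)])
    y = np.append(y, run_app(nxt, target_size))

print("best config:", X[np.argmin(y), :2], "runtime:", y.min())
```

In the paper's setting, the plain GP would be replaced by a deep multitask GP, and data from previously tuned problems would transfer to new ones through the shared surrogate.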
Award ID(s): 2004541
PAR ID: 10403983
Publisher / Repository: SAGE Publications
Journal Name: The International Journal of High Performance Computing Applications
Volume: 37
Issue: 3-4
ISSN: 1094-3420
Pages: 229-244
Sponsoring Org: National Science Foundation
More Like this
  1.
    The objective of this work is to reduce the cost of model-based sensitivity analysis for ultrasonic nondestructive testing systems by replacing the accurate physics-based model with machine learning (ML) algorithms that can quickly compute Sobol' indices. The ML algorithms considered are neural networks (NNs), convolutional NNs (CNNs), and deep Gaussian processes (DGPs). Their performance is measured by the root mean-squared error on a fixed number of testing points and by the number of high-fidelity samples required to reach a target accuracy. The algorithms are compared on three ultrasonic testing benchmark cases, each with three uncertain parameters: a spherical void defect under a focused transducer, the same defect under a planar transducer, and a spherical-inclusion defect under a focused transducer. NNs required 35, 100, and 35 samples for the three cases, respectively; CNNs required 35, 100, and 56; and DGPs required 84, 84, and 56.
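    A minimal sketch of this surrogate workflow, assuming a small scikit-learn neural network in place of the paper's surrogates and the SALib package for the Sobol' estimates; `high_fidelity_model` (the Ishigami benchmark) is a hypothetical stand-in for the ultrasonic physics code.

```python
# Hedged sketch: train a cheap surrogate on a handful of "expensive" runs,
# then estimate Sobol' indices from surrogate predictions with SALib.
# `high_fidelity_model` (Ishigami benchmark) stands in for the physics code.
import numpy as np
from sklearn.neural_network import MLPRegressor
from SALib.sample import saltelli
from SALib.analyze import sobol

def high_fidelity_model(x):
    """Hypothetical expensive simulation with three uncertain inputs."""
    return (np.sin(x[:, 0]) + 7.0 * np.sin(x[:, 1]) ** 2
            + 0.1 * x[:, 2] ** 4 * np.sin(x[:, 0]))

problem = {"num_vars": 3,
           "names": ["x1", "x2", "x3"],
           "bounds": [[-np.pi, np.pi]] * 3}

rng = np.random.default_rng(1)
X_train = rng.uniform(-np.pi, np.pi, size=(100, 3))  # ~100 high-fidelity runs
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000,
                         random_state=0).fit(X_train,
                                             high_fidelity_model(X_train))

X_sobol = saltelli.sample(problem, 1024)   # many cheap surrogate evaluations
Si = sobol.analyze(problem, surrogate.predict(X_sobol))
print("first-order Sobol' indices:", Si["S1"])
```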
  2. Modern machine learning models require large amounts of labeled training data to perform well. An emerging paradigm for reducing this reliance is to exploit abundantly available labeled data from a related source task to boost performance on a target task where little data is available. This approach, called transfer learning, has been applied successfully in many domains. However, despite the many transfer learning algorithms that have been developed, the fundamental understanding of when, and to what extent, transfer learning can reduce sample complexity is still limited. In this work, we take a step toward a foundational understanding of transfer learning by focusing on binary classification with linear models and Gaussian features, and we develop statistical minimax lower bounds in terms of the number of source and target samples and an appropriate notion of similarity between source and target tasks. To derive the bound, we reduce the transfer learning problem to hypothesis testing by constructing a packing set of source and target parameters via the Gilbert-Varshamov bound, which in turn yields a lower bound on sample complexity. We also evaluate our theoretical results with experiments on real data sets.
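    A toy simulation, not the paper's minimax construction, that exhibits the trade-off the bounds formalize: pooling source samples with scarce target samples helps when the source and target parameters are close and hurts when they are far apart; all dimensions and sample counts below are illustrative.

```python
# Toy simulation (illustrative only): linear classification with Gaussian
# features. A classifier fit on pooled source+target samples is evaluated on
# the target task for varying task similarity and source sample counts.
import numpy as np

def gen(n, d, w, rng):
    X = rng.standard_normal((n, d))
    return X, np.sign(X @ w + 0.1 * rng.standard_normal(n))

rng = np.random.default_rng(0)
d, n_target = 50, 20
w_t = np.ones(d) / np.sqrt(d)                  # target parameter (unit norm)
for angle in (0.0, 0.3, 1.0):                  # source/target dissimilarity
    v = rng.standard_normal(d)
    v -= (v @ w_t) * w_t                       # direction orthogonal to w_t
    v /= np.linalg.norm(v)
    w_s = np.cos(angle) * w_t + np.sin(angle) * v  # rotated source parameter
    for n_source in (0, 50, 500):
        errs = []
        for _ in range(50):
            Xt, yt = gen(n_target, d, w_t, rng)
            if n_source:
                Xs, ys = gen(n_source, d, w_s, rng)
                Xt, yt = np.vstack([Xs, Xt]), np.concatenate([ys, yt])
            w_hat = np.linalg.lstsq(Xt, yt, rcond=None)[0]  # pooled fit
            Xe, ye = gen(2000, d, w_t, rng)
            errs.append(np.mean(np.sign(Xe @ w_hat) != ye))
        print(f"angle={angle:.1f}  n_source={n_source:3d}  "
              f"target error={np.mean(errs):.3f}")
```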
  3.
    We study the transfer learning process between two linear regression problems. An important and timely special case is when the regressors are overparameterized and perfectly interpolate their training data. We examine a parameter transfer mechanism whereby a subset of the parameters of the target task solution is constrained to the values learned for a related source task. We analytically characterize the generalization error of the target task in terms of the salient factors in the transfer learning architecture: the number of examples available, the number of (free) parameters in each task, the number of parameters transferred from the source to the target task, and the correlation between the two tasks. Our non-asymptotic analysis shows that the generalization error of the target task follows a two-dimensional double descent trend (with respect to the number of free parameters in each task) that is controlled by the transfer learning factors. The analysis points to specific cases where transferring parameters is beneficial: transferring a set of parameters that generalizes well on the corresponding part of the source task can soften the demand on the level of task correlation required for successful transfer. Moreover, the usefulness of a transfer learning setting is fragile, depending on a delicate interplay among the set of transferred parameters, the relation between the tasks, and the true solution.
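    A minimal sketch of this parameter-transfer mechanism under the stated setup: solve the source regression, freeze the first k coordinates of the target solution to the source values, and min-norm-interpolate the target data over the remaining free coordinates; the dimensions and the 0.9 task correlation below are illustrative assumptions.

```python
# Hedged sketch of the parameter-transfer mechanism: k coordinates of the
# target solution are frozen to the min-norm source solution; the rest
# min-norm-interpolate the target data (overparameterized regime, d > n_t).
import numpy as np

rng = np.random.default_rng(0)
d, n_s, n_t, k = 100, 80, 30, 40               # dims, samples, transferred params
w_src = rng.standard_normal(d)
w_tgt = 0.9 * w_src + 0.1 * rng.standard_normal(d)  # correlated tasks (assumed)

Xs, Xt = rng.standard_normal((n_s, d)), rng.standard_normal((n_t, d))
ys, yt = Xs @ w_src, Xt @ w_tgt

w_hat_s = np.linalg.pinv(Xs) @ ys              # min-norm source solution
w_hat = np.zeros(d)
w_hat[:k] = w_hat_s[:k]                        # transferred (frozen) coordinates
resid = yt - Xt[:, :k] @ w_hat[:k]
w_hat[k:] = np.linalg.pinv(Xt[:, k:]) @ resid  # interpolate over free coordinates

Xe = rng.standard_normal((5000, d))            # target generalization error
print("target test MSE:", np.mean((Xe @ (w_hat - w_tgt)) ** 2))
```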
  4.
    Many real-world applications involve longitudinal data consisting of observations of several variables, where different subsets of variables are sampled at irregularly spaced time points. We introduce the Longitudinal Gaussian Process Latent Variable Model (L-GPLVM), a variant of the Gaussian Process Latent Variable Model, for learning compact representations of such data. L-GPLVM overcomes a key limitation of the Dynamic Gaussian Process Latent Variable Model and its variants, which rely on the assumption that the data are fully observed over all sampled time points. We describe an effective approach to learning the parameters of L-GPLVM from sparse observations by coupling the dynamical model with a Multitask Gaussian Process model that samples the missing observations at each step of the gradient-based optimization of the variational lower bound. We further show the advantage of the Sparse Process Convolution framework for learning the latent representation of sparsely and irregularly sampled longitudinal data with minimal computational overhead relative to a standard latent variable model. Experiments with synthetic data and with variants of MOCAP data at varying degrees of observation sparsity show that L-GPLVM substantially and consistently outperforms state-of-the-art alternatives in recovering missing observations, even when the available data are highly sparse. The resulting compact representations of irregularly sampled and sparse longitudinal data can be used for a variety of machine learning tasks, including clustering, classification, and regression.
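    The multitask-GP step for the missing observations can be illustrated with a far simpler model than the L-GPLVM itself: the sketch below assumes an intrinsic coregionalization kernel (a time kernel multiplied by a fixed task covariance B) and uses the GP posterior mean to impute a sparsely sampled variable from a related, densely sampled one; B and the toy signals are assumptions, not the paper's model.

```python
# Hedged sketch: a multitask GP with an intrinsic coregionalization kernel
# (time kernel x task covariance B) imputes a sparsely sampled variable from
# a related, densely sampled one. B and the toy signals are assumptions.
import numpy as np

def rbf(a, b, ls=1.0):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

rng = np.random.default_rng(0)
B = np.array([[1.0, 0.8],                     # assumed task covariance
              [0.8, 1.0]])

t0 = rng.uniform(0, 10, 30)                   # task 0: densely sampled
t1 = rng.uniform(0, 10, 5)                    # task 1: sparsely sampled
y = np.concatenate([np.sin(t0), np.sin(t1) + 0.5])
times = np.concatenate([t0, t1])
tasks = np.array([0] * len(t0) + [1] * len(t1))

K = rbf(times, times) * B[np.ix_(tasks, tasks)] + 1e-4 * np.eye(len(y))
t_star = np.linspace(0, 10, 5)                # impute task 1 at new times
K_star = rbf(t_star, times) * B[1, tasks][None, :]
mean = K_star @ np.linalg.solve(K, y)         # GP posterior mean for task 1
print("imputed task-1 values:", np.round(mean, 3))
```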
  5. Semantic segmentation algorithms such as UNet, which rely on convolutional neural network (CNN)-based architectures, have shown promise for anthropogenic geomorphic feature extraction from land surface parameters (LSPs) derived from digital terrain models (DTMs), owing to their ability to capture local textures and spatial context. However, operationalizing these supervised classification methods is limited by the lack of large volumes of quality training data. This study explores transfer learning, in which information learned from another, often much larger, dataset is used to reduce the need for a large, problem-specific training dataset. Two anthropogenic geomorphic feature extraction problems are explored: extracting agricultural terraces and mapping valley fill faces associated with surface coal mine reclamation. Light detection and ranging (LiDAR)-derived DTMs were used to generate the LSPs. We developed custom transfer parameters by predicting geomorphon-based landforms from a large digital terrain dataset provided by the United States Geological Survey's 3D Elevation Program (3DEP); we also explored pre-trained ImageNet parameters and initialization from parameters learned on the other mapping task investigated. The geomorphon-based transfer learning performed worst, while the ImageNet-based parameters generally improved performance relative to random initialization, even when the encoder was frozen or not trained; transfer learning between the two geomorphic datasets offered minimal benefit. We suggest that pre-trained models developed on large, image-based datasets may be valuable for anthropogenic geomorphic feature extraction from LSPs despite the data and task disparities. More specifically, ImageNet-based parameters should be considered as an initialization state for the encoder component of semantic segmentation architectures applied to this problem, even when using non-RGB image-based predictor variables such as LSPs. The limited value of transfer learning between the geomorphic mapping tasks may stem from their smaller sample sizes, which highlights the need for continued research into unsupervised and semi-supervised learning methods, especially given the large volume of digital terrain data available despite the lack of associated labels.
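    A minimal sketch of the recommended initialization, assuming the segmentation_models_pytorch library rather than the study's own code: a UNet with an ImageNet-pretrained encoder adapted to a single LSP band, with the option of freezing the encoder; the tile size and random batch are placeholders.

```python
# Hedged sketch using segmentation_models_pytorch (not the study's code):
# UNet with an ImageNet-pretrained encoder adapted to one LSP band; the
# encoder can be frozen, which the abstract reports still helped.
import torch
import segmentation_models_pytorch as smp

model = smp.Unet(
    encoder_name="resnet34",       # ImageNet-pretrained backbone
    encoder_weights="imagenet",
    in_channels=1,                 # a single LSP band instead of 3 RGB channels
    classes=2,                     # feature vs. background
)

freeze_encoder = True
if freeze_encoder:
    for p in model.encoder.parameters():
        p.requires_grad = False    # train only the decoder/head

opt = torch.optim.Adam((p for p in model.parameters() if p.requires_grad),
                       lr=1e-4)
loss_fn = torch.nn.CrossEntropyLoss()

x = torch.randn(4, 1, 256, 256)             # placeholder batch of LSP tiles
y = torch.randint(0, 2, (4, 256, 256))      # placeholder labels
opt.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
opt.step()
print("one training step, loss =", float(loss))
```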