
Title: Visual Transfer for Reinforcement Learning via Wasserstein Domain Confusion
We introduce Wasserstein Adversarial Proximal Policy Optimization (WAPPO), a novel algorithm for visual transfer in Reinforcement Learning that explicitly learns to align the distributions of extracted features between a source and target task. WAPPO approximates and minimizes the Wasserstein-1 distance between the distributions of features from source and target domains via a novel Wasserstein Confusion objective. WAPPO outperforms the prior state-of-the-art in visual transfer and successfully transfers policies across Visual Cartpole and both the easy and hard settings of 16 OpenAI Procgen environments.
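To make the mechanism concrete, here is a minimal, hypothetical sketch of a Wasserstein Confusion term in PyTorch: a critic estimates the Wasserstein-1 distance between source and target feature batches (with a gradient penalty standing in for the 1-Lipschitz constraint), and the shared encoder is trained to shrink that estimate alongside the usual PPO loss. The class names, network sizes, and loss weights are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of a Wasserstein Confusion term (WGAN-GP-style critic);
# not the authors' implementation. Sizes and weights below are assumptions.
import torch
import torch.nn as nn

class FeatureCritic(nn.Module):
    """Scores feature vectors; trained toward the Wasserstein-1 dual."""
    def __init__(self, feat_dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.net(feats)

def gradient_penalty(critic, src, tgt):
    """Soft 1-Lipschitz constraint on features interpolated between domains."""
    alpha = torch.rand(src.size(0), 1, device=src.device)
    mix = (alpha * src + (1 - alpha) * tgt).requires_grad_(True)
    grads, = torch.autograd.grad(critic(mix).sum(), mix, create_graph=True)
    return ((grads.norm(2, dim=1) - 1) ** 2).mean()

def wasserstein_estimate(critic, src_feats, tgt_feats):
    """Critic's estimate of W1(source, target) under the dual formulation."""
    return critic(src_feats).mean() - critic(tgt_feats).mean()

# Training alternates two steps:
#   critic:  maximize wasserstein_estimate(...) - lam_gp * gradient_penalty(...)
#   encoder: add lam_conf * wasserstein_estimate(...) to the PPO loss, so the
#            shared encoder pulls source and target features together.
```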
Award ID(s):
1717569
PAR ID:
10310141
Author(s) / Creator(s):
Date Published:
Journal Name:
Proceedings of the Thirty-Fifth AAAI Conference on Artificial Intelligence
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. The Wasserstein barycenter is a principled way to represent the weighted mean of a given set of probability distributions, utilizing the geometry induced by optimal transport. In this work, we present a novel scalable algorithm to approximate Wasserstein barycenters, aimed at high-dimensional applications in machine learning. Our proposed algorithm is based on the Kantorovich dual formulation of the Wasserstein-2 distance and on a recent neural network architecture, the input convex neural network, which is known to parametrize convex functions. The distinguishing features of our method are: i) it only requires samples from the marginal distributions; ii) unlike existing approaches, it represents the barycenter with a generative model and can thus generate infinitely many samples from the barycenter without querying the marginal distributions; iii) it works similarly to a generative adversarial network in the one-marginal case. We demonstrate the efficacy of our algorithm by comparing it with state-of-the-art methods in multiple experiments.
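As a rough illustration of the ingredients above, the following PyTorch sketch parametrizes a convex potential with an input convex neural network: hidden-to-hidden weights are kept non-negative (via softplus) and the activations are convex and non-decreasing, which together guarantee convexity in the input. Depth, widths, and the reparameterization are assumptions, not the paper's exact architecture.

```python
# Minimal input convex neural network (ICNN) sketch in PyTorch; depth, widths,
# and the softplus reparameterization of non-negative weights are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ICNN(nn.Module):
    """f(x) convex in x: non-negative hidden-to-hidden weights plus convex,
    non-decreasing activations (ReLU) guarantee convexity of the output."""
    def __init__(self, dim: int, hidden: int = 64, depth: int = 3):
        super().__init__()
        self.input_maps = nn.ModuleList(
            nn.Linear(dim, hidden) for _ in range(depth))
        # Raw weights pushed through softplus so they stay non-negative.
        self.hidden_raw = nn.ParameterList(
            nn.Parameter(0.1 * torch.randn(hidden, hidden))
            for _ in range(depth - 1))
        self.out_raw = nn.Parameter(0.1 * torch.randn(1, hidden))
        self.out_bias = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = F.relu(self.input_maps[0](x))
        for w_raw, inp in zip(self.hidden_raw, self.input_maps[1:]):
            z = F.relu(z @ F.softplus(w_raw).t() + inp(x))
        return z @ F.softplus(self.out_raw).t() + self.out_bias
```

In the Kantorovich dual of the Wasserstein-2 distance, the optimal potentials are convex, and the gradient of a convex potential defines a transport map; parameterizing potentials as ICNNs keeps the dual search inside the convex functions while still allowing samples to be drawn from a generative model of the barycenter.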
  2. Classification models trained on data from one source may underperform when tested on data acquired from other sources because of shifts in data distributions, which limits the models' generalizability in real-world applications. Domain adaptation methods proposed to align such shifts in source-target data distributions use contrastive learning or adversarial techniques, with or without internal cluster alignment; the intra-cluster alignment is typically performed using standalone k-means clustering on image embeddings. This paper introduces a novel deep clustering approach that aligns cluster distributions in tandem with adapting the source and target data distributions. Our method learns and aligns a mixture of cluster distributions in the unlabeled target domain with those in the source domain in a unified deep representation learning framework. Experiments demonstrate that intra-cluster alignment improves classification accuracy in nine out of ten domain adaptation examples. These improvements range between 0.3% and 2.0% compared to k-means clustering of embeddings, and between 0.4% and 5.8% compared to methods without class-level alignment. Unlike current domain adaptation methods, the proposed cluster-distribution-based deep learning provides a quantitative and explainable measure of distribution shifts in data domains. We have publicly shared the source code for the algorithm implementation.
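A hedged sketch of the core idea: align each domain's mixture of soft cluster assignments rather than running standalone k-means on embeddings. The soft-assignment form, the Jensen-Shannon matching loss, and all names are illustrative assumptions, not the paper's exact objective.

```python
# Hedged sketch of aligning the two domains' mixtures of soft cluster
# assignments; the assignment form and matching loss are assumptions.
import torch
import torch.nn.functional as F

def soft_assignments(feats, centroids, temperature=1.0):
    """Softmax over negative squared distances to learned cluster centroids."""
    d2 = torch.cdist(feats, centroids) ** 2            # (batch, n_clusters)
    return F.softmax(-d2 / temperature, dim=1)

def cluster_alignment_loss(src_feats, tgt_feats, centroids):
    """Jensen-Shannon divergence between average cluster occupancies."""
    p_src = soft_assignments(src_feats, centroids).mean(dim=0)
    p_tgt = soft_assignments(tgt_feats, centroids).mean(dim=0)
    m = 0.5 * (p_src + p_tgt)
    return 0.5 * (F.kl_div(m.log(), p_src, reduction="sum")
                  + F.kl_div(m.log(), p_tgt, reduction="sum"))

# Minimized jointly with the classification and domain-adaptation losses,
# this pulls the target's cluster distribution toward the source's.
```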
  3. Convolutional Neural Network (CNN) models and the many accessible large-scale public visual datasets have brought research in this area to a remarkable new stage. Benefiting from well-trained CNN models, small training datasets can learn comprehensive features by reusing preliminary features through transfer learning. However, performance is not guaranteed when these features are used to construct a new model, as differences always exist between the source and target domains. In this paper, we propose an Evolution Programming-based framework to address these challenges. The framework automates both the feature learning and model building processes: it first identifies the most valuable features from pre-trained models and then constructs a suitable model to capture the characteristic features for different tasks. Each resulting model differs in numerous ways. Overall, the experiments effectively reach optimal solutions, demonstrating that this time-consuming task can also be conducted by an automated process that exceeds human ability.
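For intuition, here is a toy evolutionary-programming loop over binary masks that select pre-trained features; the fitness function (e.g., validation accuracy of a model built on the selected features) and every hyperparameter are placeholders rather than the paper's framework.

```python
# Toy evolutionary search over binary feature-selection masks; fitness and
# all hyperparameters are placeholders, not the paper's framework.
import random

def evolve_feature_mask(n_features, fitness, pop=20, gens=30, mut=0.05):
    """fitness(mask) -> float; higher is better."""
    population = [[random.random() < 0.5 for _ in range(n_features)]
                  for _ in range(pop)]
    for _ in range(gens):
        ranked = sorted(population, key=fitness, reverse=True)
        parents = ranked[: pop // 2]                      # truncation selection
        children = []
        while len(children) < pop - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_features)         # one-point crossover
            children.append([g ^ (random.random() < mut)  # bit-flip mutation
                             for g in a[:cut] + b[cut:]])
        population = parents + children
    return max(population, key=fitness)
```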
  4. Transfer learning refers to the transfer of knowledge or information from a relevant source domain to a target domain. However, most existing transfer learning theories and algorithms focus on IID tasks, where the source/target samples are assumed to be independent and identically distributed. Very little effort has been devoted to theoretically studying knowledge transferability on non-IID tasks, e.g., cross-network mining. To bridge this gap, we propose rigorous generalization bounds and algorithms for cross-network transfer learning from a source graph to a target graph. The crucial idea is to characterize cross-network knowledge transferability from the perspective of the Weisfeiler-Lehman graph isomorphism test. To this end, we propose a novel Graph Subtree Discrepancy to measure the graph distribution shift between source and target graphs. Generalization error bounds for cross-network transfer learning, covering both cross-network node classification and link prediction tasks, can then be derived in terms of the source knowledge and the Graph Subtree Discrepancy across domains. This motivates us to propose a generic graph adaptive network (GRADE) that minimizes the distribution shift between source and target graphs for cross-network transfer learning. Experimental results verify the effectiveness and efficiency of our GRADE framework on both cross-network node classification and cross-domain recommendation tasks.
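The Weisfeiler-Lehman view can be made concrete with a small sketch: iteratively relabel each node by its neighborhood, collect subtree-pattern histograms for the two graphs, and compare them. The L1 histogram distance used here is a stand-in for the paper's Graph Subtree Discrepancy, which this sketch does not reproduce.

```python
# Illustrative Weisfeiler-Lehman subtree comparison between two graphs;
# an assumption-laden stand-in, not the GRADE discrepancy itself.
from collections import Counter

def wl_histogram(adj, labels, iterations=3):
    """adj: {node: [neighbors]}; labels: {node: initial label}."""
    hist = Counter(labels.values())
    for _ in range(iterations):
        # Relabel each node by (own label, sorted multiset of neighbor labels).
        labels = {v: (labels[v], tuple(sorted(labels[u] for u in adj[v])))
                  for v in adj}
        hist.update(labels.values())
    return hist

def subtree_discrepancy(adj_a, labels_a, adj_b, labels_b):
    """Normalized L1 distance between WL subtree-pattern frequencies."""
    ha, hb = wl_histogram(adj_a, labels_a), wl_histogram(adj_b, labels_b)
    na, nb = sum(ha.values()), sum(hb.values())
    return sum(abs(ha[k] / na - hb[k] / nb) for k in set(ha) | set(hb))
```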
  5. Modern NLP applications have enjoyed a great boost from neural network models. Such deep neural models, however, are not applicable to most human languages due to the lack of annotated training data for various NLP tasks. Cross-lingual transfer learning (CLTL) is a viable method for building NLP models for a low-resource target language by leveraging labeled data from other (source) languages. In this work, we focus on the multilingual transfer setting, where training data from multiple source languages is leveraged to further boost target-language performance. Unlike most existing methods that rely only on language-invariant features for CLTL, our approach coherently utilizes both language-invariant and language-specific features at the instance level. Our model leverages adversarial networks to learn language-invariant features and mixture-of-experts models to dynamically exploit the similarity between the target language and each individual source language. This enables our model to learn effectively what to share between the various languages in the multilingual setup. Moreover, when coupled with unsupervised multilingual embeddings, our model can operate in a zero-resource setting where neither target-language training data nor cross-lingual resources are available. Our model achieves significant performance gains over prior art, as shown in an extensive set of experiments on multiple text classification and sequence tagging tasks.
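A minimal sketch of the instance-level mixture-of-experts component, assuming one expert per source language and a softmax gate over instance features; this is an illustrative reading of the approach, not the released model.

```python
# Hedged sketch of instance-level mixture-of-experts over source languages;
# shapes and names are assumptions, not the paper's exact architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LanguageMoE(nn.Module):
    """One expert per source language; a per-instance gate weights experts."""
    def __init__(self, feat_dim: int, n_sources: int, n_classes: int):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Linear(feat_dim, n_classes) for _ in range(n_sources))
        self.gate = nn.Linear(feat_dim, n_sources)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # Gate weights approximate each instance's similarity to each source.
        weights = F.softmax(self.gate(feats), dim=-1)              # (B, S)
        expert_logits = torch.stack(
            [expert(feats) for expert in self.experts], dim=1)     # (B, S, C)
        return (weights.unsqueeze(-1) * expert_logits).sum(dim=1)  # (B, C)
```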